Distributed Zero-Order Algorithms for Nonconvex Multiagent Optimization

Authors

Abstract

Distributed multi-agent optimization finds many applications in distributed learning, control, estimation, etc. Most existing algorithms assume knowledge of first-order information of the objective and have been analyzed for convex problems. However, there are situations where the objective is nonconvex and one can only evaluate function values at finitely many points. In this paper we consider derivative-free distributed algorithms for nonconvex multi-agent optimization, based on recent progress in zero-order optimization. We develop two algorithms for different settings, provide a detailed analysis of their convergence behavior, and compare them with centralized gradient-based algorithms.
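Since the abstract only names the technique, the following is a minimal, hypothetical sketch of the zero-order idea it refers to: each agent forms a gradient estimate from two function evaluations and averages its iterate with its neighbors' through a doubly stochastic mixing matrix W. The estimator form, the matrix W, and the step size are illustrative assumptions, not the paper's actual algorithms.

import numpy as np

def two_point_grad(f, x, mu=1e-4, rng=None):
    # Two-point zero-order gradient estimate using only function values:
    #   g = (d / mu) * (f(x + mu*u) - f(x)) * u,  u a random unit direction.
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / mu) * (f(x + mu * u) - f(x)) * u

def distributed_zo_step(local_fs, X, W, step):
    # One round: consensus averaging (W @ X), then a local zero-order
    # descent step; row i of X is agent i's current iterate.
    G = np.stack([two_point_grad(f, x) for f, x in zip(local_fs, X)])
    return W @ X - step * G

# Toy run: 3 agents jointly minimizing sum_i ||x - c_i||^2.
c = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
local_fs = [lambda x, ci=ci: float(np.sum((x - ci) ** 2)) for ci in c]
W = np.full((3, 3), 0.25) + 0.25 * np.eye(3)   # doubly stochastic mixing matrix
X = np.zeros((3, 2))
for _ in range(500):
    X = distributed_zo_step(local_fs, X, W, step=0.05)
print(X.mean(axis=0))   # approaches the minimizer (the mean of the c_i)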


Similar References

Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solutions for Nonconvex Distributed Optimization

In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms are able to compute second-order stationary solu...
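For intuition only (this is not the GPDA or GADMM iteration from that paper), a bare-bones gradient primal-dual scheme for min_x f(x) subject to Ax = b alternates a gradient step on the augmented Lagrangian with dual ascent; rho, eta, and the iteration count below are illustrative choices.

import numpy as np

def gradient_primal_dual(grad_f, A, b, x0, rho=1.0, eta=0.01, iters=5000):
    # Augmented Lagrangian: L(x, y) = f(x) + y^T(Ax - b) + (rho/2)||Ax - b||^2.
    x, y = x0.astype(float).copy(), np.zeros(A.shape[0])
    for _ in range(iters):
        r = A @ x - b
        x = x - eta * (grad_f(x) + A.T @ (y + rho * r))   # primal descent on L
        y = y + eta * (A @ x - b)                         # dual ascent on L
    return x, y

# Example: minimize ||x||^2 subject to x1 + x2 = 1 (solution x = [0.5, 0.5]).
x, y = gradient_primal_dual(lambda x: 2 * x, np.array([[1.0, 1.0]]),
                            np.array([1.0]), np.zeros(2))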


Complexity Analysis of Second-order Line-search Algorithms for Smooth Nonconvex Optimization

There has been much recent interest in finding unconstrained local minima of smooth functions, due in part to the prevalence of such problems in machine learning and robust statistics. A particular focus is algorithms with good complexity guarantees. Second-order Newton-type methods that make use of regularization and trust regions have been analyzed from such a perspective. More recent proposa...


Parallel and Distributed Methods for Nonconvex Optimization-Part I: Theory

In this two-part paper, we propose a general algorithmic framework for the minimization of a nonconvex smooth function subject to nonconvex smooth constraints. The algorithm solves a sequence of (separable) strongly convex problems and maintains feasibility at each iteration. Convergence to a stationary solution of the original nonconvex optimization is established. Our framework is very general...
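As a toy illustration of the "sequence of strongly convex problems" idea (a hypothetical special case, not that paper's general framework), in the unconstrained setting each subproblem can be a quadratic surrogate around the current iterate, whose exact minimizer is a simple gradient step:

import numpy as np

def sca_step(grad_f, x, tau):
    # Exact minimizer of the strongly convex surrogate
    #   u(z) = grad_f(x)^T (z - x) + (tau/2) * ||z - x||^2,
    # which upper-bounds f near x when tau exceeds the gradient's
    # Lipschitz constant.
    return x - grad_f(x) / tau

def successive_convex_approx(grad_f, x0, tau=2.0, iters=1000):
    # Solving one strongly convex surrogate per iteration drives f
    # downhill and the iterates toward a stationary point of the
    # nonconvex objective.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = sca_step(grad_f, x, tau)
    return x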


Hybrid Random/Deterministic Parallel Algorithms for Nonconvex Big Data Optimization

We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a nonsmooth (possibly nonseparable), convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. The main contribution of this work is a novel parallel, hybrid random/deterministic decomposition scheme wherein, at each ...


An Augmented Lagrangian Based Algorithm for Distributed NonConvex Optimization

This paper is about distributed derivative-based algorithms for solving optimization problems with a separable (potentially nonconvex) objective function and coupled affine constraints. A parallelizable method is proposed that combines ideas from the fields of sequential quadratic programming and augmented Lagrangian algorithms. The method negotiates shared dual variables that may be interprete...
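As a rough, hypothetical sketch of the separable structure being exploited (plain dual decomposition, not that paper's SQP/augmented-Lagrangian method), each agent minimizes its own Lagrangian term in parallel while the shared dual variables are "negotiated" via ascent on the coupling residual:

import numpy as np

def dual_decomposition(local_argmins, As, b, m, alpha=0.05, iters=500):
    # min sum_i f_i(x_i)  s.t.  sum_i A_i x_i = b.
    # local_argmins[i](q) returns argmin_x f_i(x) + q @ x (solved locally).
    y = np.zeros(m)
    for _ in range(iters):
        xs = [solve(A.T @ y) for solve, A in zip(local_argmins, As)]
        residual = sum(A @ x for A, x in zip(As, xs)) - b
        y = y + alpha * residual   # shared dual variables updated here
    return xs, y

# Example: two agents with f_i(x) = ||x - c_i||^2, coupled by x_1 + x_2 = 2,
# where argmin_x f_i(x) + q @ x is available in closed form as c_i - q/2.
c1, c2 = np.array([1.0]), np.array([3.0])
solvers = [lambda q, c=c1: c - q / 2, lambda q, c=c2: c - q / 2]
As = [np.eye(1), np.eye(1)]
xs, y = dual_decomposition(solvers, As, np.array([2.0]), m=1)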



Journal

Journal title: IEEE Transactions on Control of Network Systems

Year: 2021

ISSN: 2325-5870, 2372-2533

DOI: https://doi.org/10.1109/tcns.2020.3024321